robot assistant
ActionSense: A Multimodal Dataset and Recording Framework for Human Activities Using Wearable Sensors in a Kitchen Environment
Joseph DelPreto, Chao Liu, Yiyue Luo, Michael Foshey, Yunzhu Li, Antonio Torralba, Wojciech Matusik, Daniela Rus
This paper introduces ActionSense, a multimodal dataset and recording framework with an emphasis on wearable sensing in a kitchen environment. It provides rich, synchronized data streams along with ground truth data to facilitate learning pipelines that could extract insights about how humans interact with the physical world during activities of daily living, and help lead to more capable and collaborative robot assistants. The wearable sensing suite captures motion, force, and attention information; it includes eye tracking with a first-person camera, forearm muscle activity sensors, a body-tracking system using 17 inertial sensors, finger-tracking gloves, and custom tactile sensors on the hands that use a matrix of conductive threads. This is coupled with activity labels and with externally captured data from multiple RGB cameras, a depth camera, and microphones. The specific tasks recorded in ActionSense are designed to highlight lower-level physical skills and higher-level scene reasoning or action planning. They include simple object manipulations (e.g., stacking plates), dexterous actions (e.g., peeling or cutting vegetables), and complex action sequences (e.g., setting a table or loading a dishwasher).
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Vision (0.95)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > Oregon (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Asia > China > Guangxi Province > Nanning (0.04)
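The synchronized multimodal streams described above (camera frames, muscle activity, inertial data) are recorded at different rates, so a typical first step in any learning pipeline is aligning them on a common timeline. Below is a minimal sketch of nearest-timestamp alignment; the stream names and sampling rates are illustrative assumptions, not ActionSense's actual file format or API.

```python
import numpy as np

def align_to_reference(ref_ts, stream_ts, stream_vals):
    """Nearest-neighbor alignment of a sensor stream to reference timestamps.

    For each reference timestamp, pick the stream sample whose timestamp is
    closest -- a common first step when fusing streams recorded at different
    rates (e.g., pairing one muscle-activity value with each camera frame).
    """
    idx = np.searchsorted(stream_ts, ref_ts)       # insertion points
    idx = np.clip(idx, 1, len(stream_ts) - 1)
    left, right = stream_ts[idx - 1], stream_ts[idx]
    idx -= (ref_ts - left) < (right - ref_ts)      # step back when the left neighbor is nearer
    return stream_vals[idx]

# Hypothetical rates: 30 Hz camera frames and 200 Hz muscle samples over 2 s.
cam_ts = np.linspace(0.0, 2.0, 60, endpoint=False)
emg_ts = np.linspace(0.0, 2.0, 400, endpoint=False)
emg = np.sin(2 * np.pi * emg_ts)                   # stand-in muscle signal
emg_per_frame = align_to_reference(cam_ts, emg_ts, emg)
print(emg_per_frame.shape)                         # → (60,): one value per frame
```

Interpolation or windowed averaging could replace the nearest-neighbor rule; the point is only that every stream ends up indexed by one shared reference clock.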
Learning to Plan with Personalized Preferences
Xu, Manjie, Yang, Xinyi, Liang, Wei, Zhang, Chi, Zhu, Yixin
Effective integration of AI agents into daily life requires them to understand and adapt to individual human preferences, particularly in collaborative roles. Although recent studies on embodied intelligence have advanced significantly, they typically adopt generalized approaches that overlook personal preferences in planning. We address this limitation by developing agents that not only learn preferences from a few demonstrations but also learn to adapt their planning strategies based on these preferences. Our research leverages the observation that preferences, though implicitly expressed through minimal demonstrations, can generalize across diverse planning scenarios. To systematically evaluate this hypothesis, we introduce Preference-based Planning (PbP), an embodied benchmark featuring hundreds of diverse preferences spanning from atomic actions to complex sequences. Our evaluation of state-of-the-art methods reveals that while symbol-based approaches show promise in scalability, significant challenges remain in learning to generate and execute plans that satisfy personalized preferences. We further demonstrate that incorporating learned preferences as intermediate representations in planning significantly improves the agent's ability to construct personalized plans. These findings establish preferences as a valuable abstraction layer for adaptive planning, opening new directions for research in preference-guided plan generation and execution.
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Asia > China > Hubei Province > Wuhan (0.04)
- Law (0.67)
- Information Technology (0.46)
- Government (0.46)
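The idea of learning a preference from a few demonstrations and then using it as an intermediate representation during planning can be sketched very simply. The example below is a toy stand-in, not the PbP benchmark's method: it treats a "preference" as each action's typical position in the demonstrated sequences, then orders a new set of required actions accordingly. The action names are hypothetical.

```python
from collections import defaultdict

def learn_preference(demos):
    """Learn a per-action ordering preference: its mean normalized position
    across a few demonstrated action sequences."""
    pos, cnt = defaultdict(float), defaultdict(int)
    for demo in demos:
        for i, action in enumerate(demo):
            pos[action] += i / (len(demo) - 1)   # 0.0 = first, 1.0 = last
            cnt[action] += 1
    return {a: pos[a] / cnt[a] for a in pos}

def plan(actions, preference):
    """Order the required actions using the learned preference as an
    intermediate representation; unseen actions default to the middle."""
    return sorted(actions, key=lambda a: preference.get(a, 0.5))

demos = [
    ["wipe_table", "place_mats", "place_plates", "place_cups"],
    ["wipe_table", "place_plates", "place_cups"],
]
pref = learn_preference(demos)
print(plan(["place_cups", "wipe_table", "place_plates"], pref))
# → ['wipe_table', 'place_plates', 'place_cups']
```

A real preference-conditioned planner would handle branching goals and richer preference types, but the separation is the same: demonstrations produce an explicit preference representation, and the planner consumes it.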
Improved Trust in Human-Robot Collaboration with ChatGPT
Ye, Yang, You, Hengxu, Du, Jing
Human-robot collaboration is becoming increasingly important as robots become more involved in various aspects of human life in the era of Artificial Intelligence. However, human operators' trust in robots remains a significant concern, primarily due to the lack of adequate semantic understanding and communication between humans and robots. The emergence of Large Language Models (LLMs), such as ChatGPT, provides an opportunity to develop an interactive, communicative, and robust human-robot collaboration approach. This paper explores the impact of ChatGPT on trust in a human-robot collaboration assembly task. The study designs a robot control system called RoboGPT that uses ChatGPT to control a 7-degree-of-freedom robot arm to help human operators fetch and place tools, while the operators communicate with and control the robot arm using natural language. A human-subject experiment showed that incorporating ChatGPT in robots significantly increased trust in human-robot collaboration, which can be attributed to the robot's ability to communicate more effectively with humans. Furthermore, ChatGPT's ability to understand the nuances of human language and respond appropriately helps build a more natural and intuitive human-robot interaction. The findings of this study have significant implications for the development of human-robot collaboration systems.
- North America > United States > Florida > Orange County > Orlando (0.04)
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Information Technology > Artificial Intelligence > Robots > Humanoid Robots (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
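The core loop of such a system is mapping a natural-language request onto a small set of robot primitives (fetch a tool, place a tool). The sketch below stands in for that step with a hand-written keyword parser; in RoboGPT that mapping is produced by ChatGPT, and the tool names and primitive names here are hypothetical.

```python
import re

# Hypothetical tool vocabulary and primitives for an assembly-assistance arm.
TOOLS = {"screwdriver", "wrench", "hammer"}

def parse_command(utterance):
    """Map a natural-language request to a (primitive, tool) pair.

    Returns ('ask_clarification', ...) when the intent or the tool cannot
    be resolved, mirroring how a dialogue-based controller would fall back
    to asking the operator rather than guessing.
    """
    text = utterance.lower()
    tool = next((t for t in TOOLS if t in text), None)
    if tool is None:
        return ("ask_clarification", None)
    if re.search(r"\b(fetch|bring|get|hand)\b", text):
        return ("fetch", tool)
    if re.search(r"\b(place|put|return)\b", text):
        return ("place", tool)
    return ("ask_clarification", tool)

print(parse_command("Could you bring me the wrench?"))  # → ('fetch', 'wrench')
print(parse_command("Please put the hammer back"))      # → ('place', 'hammer')
```

Replacing the parser with an LLM changes how robustly free-form language is understood, but the downstream interface — a small, verifiable set of arm primitives — stays the same, which is what makes the behavior auditable and, per the study, easier to trust.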
Robot assistants in the operating room promise safer surgery
Advanced robotics can help surgeons carry out procedures where there is little margin for error. In a surgery in India, a robot scans a patient's knee to figure out how best to carry out a joint replacement. Meanwhile, in an operating room in the Netherlands, another robot is performing highly challenging microsurgery under the control of a doctor using joysticks. Such scenarios look set to become more common. At present, some manual operations are so difficult they can be performed by only a small number of surgeons worldwide, while others are invasive and depend on a surgeon's specific skill.
- Asia > India (0.26)
- North America > United States (0.14)
- Europe > Netherlands > North Brabant > Eindhoven (0.05)
- Health & Medicine > Surgery (1.00)
- Health & Medicine > Therapeutic Area > Oncology (0.31)
Older Adults' Task Preferences for Robot Assistance in the Home
Ajaykumar, Gopika, Huang, Chien-Ming
Artificial intelligence technologies that can assist with at-home tasks have the potential to help older adults age in place. Robot assistance in particular has been applied towards physical and cognitive support for older adults living independently at home. Surveys, questionnaires, and group interviews have been used to understand what tasks older adults want robots to assist them with. We build upon prior work exploring older adults' task preferences for robot assistance through field interviews situated within older adults' aging contexts. Our findings support results from prior work indicating that older adults prefer physical assistance over social and care-related support from robots, and that they prefer to retain control when adopting robot assistance. They also highlight the variety of individual constraints, boundaries, and needs that may influence these preferences.
- Questionnaire & Opinion Survey (0.87)
- Research Report > New Finding (0.35)
Development of a mobile robot assistant for wind turbines manufacturing
The thrust for increased rating capacity of wind turbines has resulted in larger generators, longer blades, and taller towers. Wind turbine manufacturers now offer turbines of up to 16 MW, nearly a 60 percent increase in design capacity over the last five years. Manufacturing these turbines involves assembling gigantic components. Due to frequent design changes and the variety of tasks involved, conventional automation is not feasible, making manufacturing a labor-intensive activity. However, the handling and assembly of large components challenge human capabilities. The article proposes the use of mobile robotic assistants for partial automation of wind turbine manufacturing. A robotic assistant can reduce production costs and improve working conditions. The article presents the development of a robot assistant that helps human operators effectively perform assembly of wind turbines. The case is from a leading wind turbine manufacturer. The developed system is also applicable to other cases of large-component manufacturing involving intensive manual effort.
- North America > United States > Michigan (0.04)
- Asia > Japan (0.04)
- Europe > Sweden (0.04)
- (2 more...)
- Energy > Renewable > Wind (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)